A democratic way of controlling artificial general intelligence
Authors
Abstract
The problem of controlling an artificial general intelligence (AGI) has fascinated both scientists and science-fiction writers for centuries. Today the topic is becoming more important, because the time when we may have a superhuman AGI among us may lie within the foreseeable future. Current average estimates place that moment before 2060, and some place it as early as 2040, which is quite soon. The arrival of the first AGI might lead to a series of events not seen before: the rapid development of even more powerful AGIs, developed by the AGIs themselves. This has wide-ranging implications for society and is therefore something that must be studied well before it happens. In this paper we discuss ways of limiting the risks posed by the advent of AGIs. As a thought experiment, we propose an AGI with enough human-like properties to act in a democratic society, while still retaining its essential AGI properties. We discuss ways of arranging the co-existence of humans and such AGIs, using a democratic system as the means of coordinating that coexistence. If considered a success, such a system could be used to manage a society consisting of both AGIs and humans, where each member is represented at the highest level of decision-making, which guarantees that minorities would be able to have their voices heard. The unpredictability of the AGI era makes it necessary to consider even the possibility that a population of autonomous AGIs could make humans into a minority.
Similar resources
Risks of general artificial intelligence
The papers in this special volume of the Journal of Experimental and Theoretical Artificial Intelligence are the outcome of a conference on the ‘Impacts and Risks of Artificial General Intelligence’ (AGI-Impacts) that took place at the University of Oxford, St Anne’s College, on 10 and 11 December 2012 – jointly with the fifth annual conference on ‘Artificial General Intelligence’ (AGI-12). The...
Facets of Artificial General Intelligence
We argue that time has come for a serious endeavor to work towards artificial general intelligence (AGI). This positive assessment of the very possibility of AGI has partially its roots in the development of new methodological achievements in the AI area, like new learning paradigms and new integration techniques for different methodologies. The article sketches some of these methods as prototy...
Self-Regulating Artificial General Intelligence
This paper examines the paperclip apocalypse concern for artificial general intelligence. This arises when a superintelligent AI with a simple goal (i.e., producing paperclips) accumulates power so that all resources are devoted towards that goal and are unavailable for any other use. Conditions are provided under which a paperclip apocalypse can arise but the model also shows that, under certain ar...
A Foundational Architecture for Artificial General Intelligence
Implementing and fleshing out a number of psychological and neuroscience theories of cognition, the LIDA conceptual model aims at being a cognitive “theory of everything.” With modules or processes for perception, working memory, episodic memories, “consciousness,” procedural memory, action selection, perceptual learning, episodic learning, deliberation, volition, and non-routine problem solvin...
Journal
Journal title: AI & society
Year: 2022
ISSN: 0951-5666, 1435-5655
DOI: https://doi.org/10.1007/s00146-022-01426-x